Search results for "explainable artificial intelligence"

Showing 7 of 7 documents

Robustness, Stability, and Fidelity of Explanations for a Deep Skin Cancer Classification Model

2022

Skin cancer is one of the most prevalent of all cancers. Because it is widespread and externally observable, machine learning models integrated into artificial intelligence systems may enable self-screening and automatic analysis in the future. In particular, the recent success of various deep learning models suggests that patients could one day self-analyse external signs of skin cancer by uploading pictures of those signs to an artificial intelligence system, which runs such a deep learning model and returns the classification results. However, both patients and dermatologists, who might use such a system to aid their work, need to …
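The entry's keywords mention integrated gradients, an attribution technique for deep models. As a hedged illustration only (a toy differentiable function stands in for the paper's classifier, which is not shown here), the technique averages gradients along a straight path from a baseline to the input:

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate integrated gradients of a scalar model output.

    f_grad: function returning the gradient of the model output w.r.t. its input.
    x: input array; baseline: reference input (e.g. a black image).
    """
    # Interpolate inputs on a straight line from the baseline to x.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([f_grad(baseline + a * (x - baseline)) for a in alphas])
    # Average the gradients and scale by the input difference (Riemann sum).
    return (x - baseline) * grads.mean(axis=0)

# Toy model: f(x) = (w . x)^2, so grad f = 2 (w . x) w.
w = np.array([1.0, -2.0, 0.5])
f = lambda x: np.dot(w, x) ** 2
f_grad = lambda x: 2.0 * np.dot(w, x) * w

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(f_grad, x, baseline)

# Completeness property: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

For a real image classifier the gradients would come from the deep learning framework's autodiff, not from a hand-written closed form.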

Keywords: explainable artificial intelligence; interpretable machine learning; skin cancer; convolutional neural network; deep learning; integrated gradients; local model-agnostic explanations; decision support systems; neural networks; diagnostics
Published in: Applied Sciences

Explainable Student Agency Analytics

2021

Several studies have shown that complex nonlinear learning analytics (LA) techniques outperform traditional ones. However, the actual integration of these techniques into automatic LA systems remains rare because they are generally presumed to be opaque. At the same time, current reviews of LA in higher education point out that LA should be more firmly grounded in the learning sciences, with an actual linkage to teachers and pedagogical planning. In this study, we aim to address these two challenges. First, we discuss different techniques that open up the decision-making process of complex models and how they can be integrated into LA tools. More precisely, we present various global and local e…

Keywords: explainable artificial intelligence; learning analytics; student agency; higher education; decision making; learning sciences; students; feedback; learning platforms; assessment
Published in: IEEE Access

Classification and Automated Interpretation of Spinal Posture Data Using a Pathology-Independent Classifier and Explainable Artificial Intelligence (…

2021

Clinical classification models are mostly pathology-dependent and, thus, are only able to detect pathologies they have been trained for. Research is needed regarding pathology-independent classifiers and their interpretation. Hence, our aim is to develop a pathology-independent classifier that provides prediction probabilities and explanations of the classification decisions. Spinal posture data of healthy subjects and various pathologies (back pain, spinal fusion, osteoarthritis), as well as synthetic data, were used for modeling. A one-class support vector machine was used as a pathology-independent classifier. The outputs were transformed into a probability distribution according to Plat…
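The pipeline the abstract describes, a one-class SVM trained on healthy subjects whose scores are mapped to probabilities, can be sketched with scikit-learn. This is a minimal illustration on synthetic data; the sigmoid parameters below are illustrative constants, not fitted as in full Platt scaling:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# "Healthy" training data only: the one-class SVM learns its support.
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 3))
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

# New subjects: one near the training distribution, one far outside it.
X_new = np.array([[0.1, -0.2, 0.0], [4.0, 4.0, 4.0]])
scores = clf.decision_function(X_new)  # > 0 inside, < 0 outside the learned region

# Platt-style sigmoid mapping of scores to pseudo-probabilities of "healthy";
# the slope 5.0 is an assumed constant for illustration.
probs = 1.0 / (1.0 + np.exp(-5.0 * scores))
print(probs)  # the near-distribution subject gets the higher probability
```

In the paper's setting, the explanation layer would then attribute the classification decision to individual posture features, which this sketch omits.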

Keywords: explainable artificial intelligence; machine learning; support vector machine; binary classification; spinal posture; back pain; spinal fusion; osteoarthritis; biomechanics; synthetic data; probability distribution; data mining
Published in: Sensors (Basel, Switzerland)

Explaining the Behavior of Remote Robots to Human Users: An Agent-Oriented Approach

2020

With the widespread use of Artificial Intelligence (AI) systems, understanding the behavior of intelligent agents and robots is crucial to guarantee smooth human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent studies in the goal-driven Explainable AI (XAI) domain have confirmed that explaining an agent's behavior to humans improves their understanding of the agent and increases its acceptability. However, providing overwhelming or unnecessary information may also confuse human users and cause misunderstandings. For these reasons, the parsimony of explanations has been outlined as one of the key features facilitating …

Keywords: explainable artificial intelligence; multi-agent systems; human-computer interaction

Comparison of feature importance measures as explanations for classification models

2021

Explainable artificial intelligence is an emerging research direction that helps the users and developers of machine learning models understand why models behave the way they do. The most popular explanation technique is feature importance. However, there are several different approaches to measuring feature importance, most notably global and local ones. In this study, we compare different feature importance measures using both linear (logistic regression with L1 penalization) and non-linear (random forest) methods, with local interpretable model-agnostic explanations on top of them. These methods are applied to two datasets from the medical domain, the openly available breast cancer …
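The global side of the comparison the abstract describes can be sketched with scikit-learn. The Wisconsin breast cancer data bundled with scikit-learn is used here for illustration and may not be the paper's exact dataset; the LIME-style local explanations are omitted for brevity:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)  # scale so L1 coefficients are comparable
y = data.target

# Global importances from a sparse linear model ...
lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lr_imp = np.abs(lr.coef_.ravel())

# ... and from a non-linear ensemble.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_imp = rf.feature_importances_

# Compare the two rankings: top-5 features by each measure.
print("L1 logistic: ", data.feature_names[np.argsort(lr_imp)[::-1][:5]])
print("Random forest:", data.feature_names[np.argsort(rf_imp)[::-1][:5]])
```

The L1 penalty drives many coefficients exactly to zero, so the linear measure yields a sparse ranking, while the forest's impurity-based importances spread mass across correlated features; disagreement between the two rankings is itself informative.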

Keywords: explainable artificial intelligence; feature importance; interpretable models; logistic regression; random forest; machine learning; classification; trustworthiness; injury data
Published in: SN Applied Sciences

Towards explainable interactive multiobjective optimization : R-XIMO

2022

In interactive multiobjective optimization methods, the preferences of a decision maker are incorporated into the solution process to find solutions of interest for problems with multiple conflicting objectives. Since multiple solutions with various trade-offs exist for these problems, preferences are crucial for identifying the best solution(s). However, it is not necessarily clear to the decision maker how the preferences lead to particular solutions and, by introducing explanations to interactive multiobjective optimization methods, we promote a novel paradigm of explainable interactive multiobjective optimization. As a proof of concept, we introduce a new method, R-XIMO, which provides …
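R-XIMO's own algorithm is not shown in this listing; as a generic illustration of how a reference point steers such a method, here is a minimal achievement-scalarizing-function sketch on an assumed bi-objective toy problem (the reference point, weights, and grid search are all illustrative choices):

```python
import numpy as np

# Bi-objective toy problem (minimize both): f1(x) = x^2, f2(x) = (x - 2)^2.
# Its Pareto optimal set is x in [0, 2].
def objectives(x):
    return np.array([x ** 2, (x - 2.0) ** 2])

def asf(x, ref, weights, rho=1e-6):
    """Achievement scalarizing function: the largest weighted deviation of the
    objective vector from the reference point, plus a small augmentation term."""
    d = weights * (objectives(x) - ref)
    return d.max() + rho * d.sum()

ref = np.array([0.5, 1.0])       # the decision maker's aspiration levels
weights = np.array([1.0, 1.0])

# Minimize the ASF by a simple grid search over the decision variable.
grid = np.linspace(0.0, 2.0, 20001)
best = grid[np.argmin([asf(x, ref, weights) for x in grid])]
print(best, objectives(best))    # a Pareto optimal solution steered by the reference point
```

Minimizing the ASF projects the (possibly infeasible) reference point onto the Pareto front; moving the reference point and re-solving is what makes the method interactive, and explaining how that movement changed the solution is the paper's contribution.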

Keywords: explainable artificial intelligence; interactive methods; multiobjective optimization; multiple criteria optimization; reference point; decision making; decision support systems; forest management

Explainable AI for Industry 4.0 : Semantic Representation of Deep Learning Models

2022

Artificial Intelligence is an important asset of Industry 4.0. Recent advances in machine learning, and particularly in deep learning, enable qualitative change in industrial processes, applications, systems and products. However, an important challenge concerns the explainability of (and, therefore, trust in) the decisions made by deep learning models (a.k.a. black boxes), as well as their poor capacity for being integrated with each other. Explainable artificial intelligence is needed instead, but without loss of effectiveness of the deep learning models. In this paper, we present a transformation technique between black-box models and explainable (as well as interoperable) …
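The paper's ontology and transformation technique are not shown in this listing. One way to picture a semantic representation of a model is to describe its architecture as subject-predicate-object triples, the basic unit of semantic web data; the layer list and vocabulary below are hypothetical stand-ins, not the paper's actual schema:

```python
# Hypothetical layer-list description of a small network.
model = [
    {"name": "conv1", "type": "Conv2D", "units": 32},
    {"name": "pool1", "type": "MaxPooling2D", "units": None},
    {"name": "dense1", "type": "Dense", "units": 10},
]

EX = "http://example.org/model#"  # illustrative namespace, not a real vocabulary

def to_triples(layers):
    """Flatten the layer list into (subject, predicate, object) triples."""
    triples = []
    for i, layer in enumerate(layers):
        s = EX + layer["name"]
        triples.append((s, EX + "layerType", layer["type"]))
        if layer["units"] is not None:
            triples.append((s, EX + "units", layer["units"]))
        # Link consecutive layers so the topology is queryable.
        if i + 1 < len(layers):
            triples.append((s, EX + "feedsInto", EX + layers[i + 1]["name"]))
    return triples

triples = to_triples(model)
for t in triples:
    print(t)
```

In practice such triples would be emitted with an RDF library and a shared ontology, which is what makes models from different sources interoperable and queryable.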

Keywords: explainable artificial intelligence; Industry 4.0; semantic web; deep learning; machine learning; predictive maintenance; condition monitoring; production engineering